Bayesian network


A Bayesian network, Bayes network, belief network, Bayes(ian) model or probabilistic directed acyclic graphical model is a probabilistic graphical model (a type of statistical model) that represents a set of random variables and their conditional dependencies via a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases.
Formally, Bayesian networks are DAGs whose nodes represent random variables in the Bayesian sense: they may be observable quantities, latent variables, unknown parameters or hypotheses. Edges represent conditional dependencies; nodes that are not connected (no path joins one variable to the other in the Bayesian network) represent variables that are conditionally independent of each other. Each node is associated with a probability function that takes, as input, a particular set of values for the node's parent variables and gives, as output, the probability (or probability distribution, if applicable) of the variable represented by the node. For example, if m parent nodes represent m Boolean variables, then the probability function could be represented by a table of 2^m entries, one entry for each of the 2^m possible combinations of its parents being true or false. Similar ideas may be applied to undirected, and possibly cyclic, graphs; these are called Markov networks.
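As an illustration of this table representation, here is a minimal Python sketch (the node, its parent names, and the probabilities are hypothetical, chosen only to show the 2^m-entry structure):
<syntaxhighlight lang="python">
from itertools import product

def boolean_cpt(parent_names, prob_true):
    """Tabulate P(node = T | parents) for every combination of Boolean
    parent values: a table with 2^m entries for m parents."""
    m = len(parent_names)
    return {values: prob_true(dict(zip(parent_names, values)))
            for values in product((True, False), repeat=m)}

# Hypothetical node with m = 3 Boolean parents: the CPT has 2^3 = 8 entries.
cpt = boolean_cpt(["A", "B", "C"], lambda v: 0.9 if v["A"] else 0.1)
print(len(cpt))  # 8
</syntaxhighlight>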
Efficient algorithms exist that perform inference and learning in Bayesian networks. Bayesian networks that model sequences of variables (''e.g.'' speech signals or protein sequences) are called dynamic Bayesian networks. Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams.
==Example==

Suppose that there are two events which could cause grass to be wet: either the sprinkler is on or it is raining. Also, suppose that the rain has a direct effect on the use of the sprinkler (namely that when it rains, the sprinkler is usually not turned on). The situation can then be modeled with a Bayesian network in which ''Rain'' is a parent of both ''Sprinkler'' and ''Grass wet'', and ''Sprinkler'' is a second parent of ''Grass wet''. All three variables have two possible values, T (for true) and F (for false). The conditional probability tables (recoverable from the calculations below) are: \mathrm P(R=T) = 0.2; \mathrm P(S=T\mid R=T) = 0.01 and \mathrm P(S=T\mid R=F) = 0.4; and \mathrm P(G=T\mid S,R) = 0.99, 0.9, 0.8, 0.0 for (S,R) = (T,T), (T,F), (F,T), (F,F), respectively.
The joint probability function is:
: \mathrm P(G,S,R)=\mathrm P(G\mid S,R)\mathrm P(S\mid R)\mathrm P(R)
where the names of the variables have been abbreviated to ''G = Grass wet (yes/no)'', ''S = Sprinkler turned on (yes/no)'', and ''R = Raining (yes/no)''.
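As a concrete sketch, this factorization can be coded directly in Python; the CPT values below are the ones implied by the numerical calculations later in this section:
<syntaxhighlight lang="python">
# Sketch of the sprinkler network's joint distribution.
P_R = {True: 0.2, False: 0.8}                    # P(R)
P_S_given_R = {True: 0.01, False: 0.4}           # P(S=T | R)
P_G_given_SR = {(True, True): 0.99, (True, False): 0.9,
                (False, True): 0.8, (False, False): 0.0}  # P(G=T | S, R)

def joint(g, s, r):
    """P(G=g, S=s, R=r) = P(G | S, R) * P(S | R) * P(R)."""
    pg = P_G_given_SR[(s, r)] if g else 1.0 - P_G_given_SR[(s, r)]
    ps = P_S_given_R[r] if s else 1.0 - P_S_given_R[r]
    return pg * ps * P_R[r]

print(joint(True, True, True))  # 0.99 * 0.01 * 0.2 = 0.00198
</syntaxhighlight>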
The model can answer questions like "What is the probability that it is raining, given the grass is wet?" by using the conditional probability formula and summing over all nuisance variables:
:
\mathrm P(\mathit{R}=T \mid \mathit{G}=T)
= \frac{\mathrm P(\mathit{G}=T,\mathit{R}=T)}{\mathrm P(\mathit{G}=T)}
= \frac{\sum_{\mathit{S} \in \{T,F\}} \mathrm P(\mathit{G}=T,\mathit{S},\mathit{R}=T)}{\sum_{\mathit{S},\mathit{R} \in \{T,F\}} \mathrm P(\mathit{G}=T,\mathit{S},\mathit{R})}

Using the expansion for the joint probability function \mathrm P(G,S,R) and the conditional probabilities from the conditional probability tables (CPTs) given above, one can evaluate each term in the sums in the numerator and denominator. For example,
:
\begin{align}
\mathrm P(G=T, & \,S=T,R=T) \\
& = \mathrm P(G=T\mid S=T,R=T)\,\mathrm P(S=T\mid R=T)\,\mathrm P(R=T) \\
& = 0.99 \times 0.01 \times 0.2 \\
& = 0.00198.
\end{align}

Then the numerical results (subscripted by the associated variable values) are
:
\begin{align}
\mathrm P(R=T \mid G=T) & = \frac{ 0.00198_{TTT} + 0.1584_{TFT} }{ 0.00198_{TTT} + 0.288_{TTF} + 0.1584_{TFT} + 0.0_{TFF} } \\
& = \frac{891}{2491} \approx 35.77\,\%.
\end{align}
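The same posterior can be checked by brute-force enumeration, reusing the CPT dictionaries and joint() from the sketch above:
<syntaxhighlight lang="python">
from itertools import product

# P(R=T | G=T): sum over the nuisance variable S in the numerator and
# over both S and R in the denominator.
numerator = sum(joint(True, s, True) for s in (True, False))
denominator = sum(joint(True, s, r)
                  for s, r in product((True, False), repeat=2))
print(numerator / denominator)  # 0.16038 / 0.44838 ≈ 0.3577
</syntaxhighlight>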

If, on the other hand, we wish to answer an interventional question: "What is the probability that it would rain, given that we wet the grass?" the answer would be governed by the post-intervention joint distribution function \mathrm P(S,R\mid \text{do}(G=T)) = \mathrm P(S\mid R)\,\mathrm P(R) obtained by removing the factor \mathrm P(G\mid S,R) from the pre-intervention distribution. As expected, the probability of rain is unaffected by the action: \mathrm P(R\mid\text{do}(G=T)) = \mathrm P(R).
If, moreover, we wish to predict the impact of turning the sprinkler on, we have
: \mathrm P(R,G\mid \text{do}(S=T)) = \mathrm P(R)\,\mathrm P(G\mid R,S=T)
with the term \mathrm P(S=T\mid R) removed, showing that the action has an effect on the grass but not on the rain.
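A short sketch of this truncated factorization (again reusing the CPT dictionaries above): dropping the factor P(S | R) and clamping S to T leaves the marginal of R untouched:
<syntaxhighlight lang="python">
# P(R=r, G=g | do(S=T)) = P(R=r) * P(G=g | S=T, R=r):
# the factor P(S | R) is removed and S is clamped to True.
def joint_do_S(g, r):
    pg = P_G_given_SR[(True, r)] if g else 1.0 - P_G_given_SR[(True, r)]
    return P_R[r] * pg

# Marginalizing out G recovers P(R): intervening on S does not affect rain.
print(sum(joint_do_S(g, True) for g in (True, False)))  # 0.2 = P(R=T)
</syntaxhighlight>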
These predictions may not be feasible when some of the variables are unobserved, as in most policy evaluation problems. The effect of the action \text{do}(x) can still be predicted, however, whenever a criterion called the "back-door" criterion is satisfied.〔"The Back-Door Criterion"〕 It states that, if a set ''Z'' of nodes can be observed that ''d''-separates〔"d-Separation without Tears"〕 (or blocks) all back-door paths from ''X'' to ''Y'', then \mathrm P(Y,Z\mid \text{do}(x)) = \mathrm P(Y,Z,X=x)/\mathrm P(X=x\mid Z). A back-door path is one that begins with an arrow pointing into ''X''. Sets that satisfy the back-door criterion are called "sufficient" or "admissible." For example, the set ''Z'' = {''R''} is admissible for predicting the effect of ''S'' = ''T'' on ''G'', because ''R'' ''d''-separates the (only) back-door path ''S'' ← ''R'' → ''G''. However, if ''S'' is not observed, no other set ''d''-separates this path, and the effect of turning the sprinkler on (''S'' = ''T'') on the grass (''G'') cannot be predicted from passive observations. We then say that ''P''(''G'' | do(''S'' = ''T'')) is not "identified." This reflects the fact that, lacking interventional data, we cannot determine whether the observed dependence between ''S'' and ''G'' is due to a causal connection or is spurious, i.e. an apparent dependence arising from a common cause, ''R'' (see Simpson's paradox).
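As a numerical check of the criterion in this example, the adjustment formula with the admissible set ''Z'' = {''R''} can be evaluated directly (reusing the definitions from the sketches above); it agrees with the truncated-factorization result:
<syntaxhighlight lang="python">
# Back-door adjustment with Z = {R}:
# P(G=T | do(S=T)) = sum_r P(G=T | S=T, R=r) * P(R=r)
p_g_do_s = sum(P_G_given_SR[(True, r)] * P_R[r] for r in (True, False))
print(p_g_do_s)  # 0.99*0.2 + 0.9*0.8 = 0.918

# The same number falls out of the truncated factorization sketched above:
print(sum(joint_do_S(True, r) for r in (True, False)))  # 0.918
</syntaxhighlight>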
To determine whether a causal relation is identified from an arbitrary Bayesian network with unobserved variables, one can use the three rules of "''do''-calculus" and test whether all ''do'' terms can be removed from the expression of that relation, thus confirming that the desired quantity is estimable from frequency data.〔I. Shpitser, J. Pearl, "Identification of Conditional Interventional Distributions", in R. Dechter and T.S. Richardson (eds.), ''Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence'', 437–444, Corvallis, OR: AUAI Press, 2006.〕
Using a Bayesian network can save considerable amounts of memory when the dependencies in the joint distribution are sparse. For example, a naive way of storing the conditional probabilities of 10 two-valued variables as a table requires storage space for 2^{10} = 1024 values. If no variable's local distribution depends on more than three parent variables, the Bayesian network representation needs to store at most 10\cdot 2^3 = 80 values.
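The count is easy to verify:
<syntaxhighlight lang="python">
# Full joint table over 10 Boolean variables:
print(2 ** 10)      # 1024 entries
# One CPT per variable, each with at most 3 Boolean parents:
print(10 * 2 ** 3)  # 80 entries
</syntaxhighlight>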
One advantage of Bayesian networks is that it is intuitively easier for a human to understand (a sparse set of) direct dependencies and local distributions than complete joint distributions.

Source: Wikipedia, the free encyclopedia ("Bayesian network").


